
Zstd bomb fix #937

Open · wants to merge 8 commits into main
Conversation

mzabaluev (Collaborator) commented on Dec 10, 2024

Fixes #876 (but see outstanding issues below).

Dependencies

  • movementlabsxyz/aptos-core#111

Summary

  • Categories: protocol-units.

Use the streamed decompression API to avoid potentially huge allocations
for blob data decompressed from zstd. The bcs decoder enforces a
length limit of 2^31 - 1 on byte arrays.
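As a sketch of what this looks like in code (the function name and signature here are illustrative, not the exact code in this PR; the actual diff appears in the review thread below):

```rust
use anyhow::Context;

/// Sketch: decompress a zstd blob and decode it with BCS without ever
/// materializing the full decompressed payload in a single allocation.
fn decode_blob<T: serde::de::DeserializeOwned>(compressed: &[u8]) -> anyhow::Result<T> {
    // `&[u8]` implements `Read`; the decoder pulls compressed bytes from
    // the slice and yields decompressed bytes on demand instead of
    // buffering the whole output.
    let decoder = zstd::Decoder::new(compressed)?;
    // `from_reader` consumes the stream incrementally; the bcs decoder
    // rejects byte-array length prefixes above 2^31 - 1.
    bcs::from_reader(decoder).context("failed to deserialize blob")
}
```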

Changelog

  • Prevent unlimited memory allocations in decoding compressed blobs from the DA.

Testing

Added unit tests to movement-celestia-da-util to exercise compression and decompression,
with both over-limit and valid lengths for the compressed payload and the data field. The test that creates legitimate data structures with super-long blobs is ignored in default runs because of its memory and processing requirements.

Outstanding issues

As commented in #876 (comment), there are four byte-array fields in the blob data structure, each of which can be bloated up to 2 GiB.
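To make the arithmetic concrete, a purely hypothetical shape (field names are invented, not the real definitions): four independently length-prefixed byte arrays, each capped at 2^31 - 1 bytes, can still demand roughly 8 GiB in total.

```rust
// Purely hypothetical illustration, not the actual blob definition: each
// Vec<u8> carries its own BCS length prefix, so the 2^31 - 1 per-array
// cap multiplies across fields (~8 GiB worst case for four).
#[derive(serde::Deserialize)]
struct BlobDataIllustration {
    field_a: Vec<u8>,
    field_b: Vec<u8>,
    field_c: Vec<u8>,
    field_d: Vec<u8>,
}
```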

The change under review replaces the buffered decode with a streamed one:

```rust
// Before: decode from a fully decompressed in-memory buffer.
bcs::from_bytes(decompressed.as_slice()).context("failed to deserialize blob")?;

// After: decompress the blob and deserialize the data with bcs in one stream.
let decoder = zstd::Decoder::new(blob.data.as_slice())?;
let blob = bcs::from_reader(decoder).context("failed to deserialize blob")?;
```
A collaborator commented:

Do you know how fill_buf would be called here? I'm wondering whether this is in fact going to raise an Error, or whether it might just panic if the buffer size is restricted.

mzabaluev (Collaborator, Author) replied on Dec 11, 2024:

I don't think I fully understand this question, and we didn't restrict the size of the internal buffer.
But you did bring to my attention that the slice also supports BufRead and we can take advantage of it, bypassing the intermediate buffer.
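Concretely, since a byte slice already implements BufRead, the decoder can wrap it directly (a sketch against the zstd-rs API, where Decoder::with_buffer accepts any BufRead source):

```rust
use std::io::Read;

/// Sketch: a byte slice already implements `BufRead`, so
/// `Decoder::with_buffer` can wrap it directly, skipping the internal
/// `BufReader` that `Decoder::new` would otherwise add.
fn streaming_decoder(data: &[u8]) -> std::io::Result<impl Read + '_> {
    zstd::Decoder::with_buffer(data)
}
```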

mzabaluev (Collaborator, Author) replied:

Yes, bcs would return an Error.

I have implemented a customizable limit on the length of variable-length fields in zefchain/bcs#12.
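Until such a limit is available, one dependency-free way to bound total memory is to cap the decompressed stream itself with std's Read::take. This is an illustrative mitigation, not the approach taken in this PR, and it assumes the decoder reads byte arrays incrementally rather than pre-allocating from the declared length:

```rust
use std::io::Read;
use anyhow::Context;

/// Illustrative mitigation: cap the total decompressed size, so that even
/// several fields that each pass the per-array check cannot demand
/// unbounded memory overall. Exceeding the cap surfaces as an
/// unexpected-EOF deserialization error.
fn decode_blob_capped<T: serde::de::DeserializeOwned>(
    compressed: &[u8],
    max_decompressed: u64,
) -> anyhow::Result<T> {
    let decoder = zstd::Decoder::new(compressed)?.take(max_decompressed);
    bcs::from_reader(decoder).context("failed to deserialize blob")
}
```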

Commits

  • Use streamed decompression API to avoid potentially huge allocations for blob data decompressed from zstd.
  • Avoid an intermediate buffer when serializing and compressing for CelestiaBlob, plugging the encoder into the serializer.
  • The data slice is already BufRead, just use it directly.
  • Use a crafted zstd payload as submitted in #876 (comment).
  • Need this to enable length limit enforcement on deserialization of blobs.

mzabaluev marked this pull request as ready for review on December 11, 2024 19:04
Development

Successfully merging this pull request may close issue #876: (OTS) Zstd Bomb Blob.